A previous post described how to dynamically add a node; this one explains how to dynamically delete a node. Another earlier post covered how to limit which nodes may connect, and dynamic deletion builds on that configuration.
1. Configure dfs.hosts.exclude on host1. Add host4 to the file specified by dfs.hosts.exclude, then execute the following command:
hadoop dfsadmin -refreshNodes
Then check with the following command:
hadoop dfsadmin -report
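Spelled out as a minimal sketch (the exclude-file path /opt/hadoop/conf/excludes stands in for whatever the earlier post configured; it is an assumption, as is the hostname host4 being the node to remove):
# On host1 (the Namenode): add the node being removed to the exclude file
echo "host4" >> /opt/hadoop/conf/excludes   # path is an assumed example
# Make the Namenode re-read its include/exclude lists
hadoop dfsadmin -refreshNodes
# In the report, host4's "Decommission Status" should change from "Normal"
# to "Decommission in progress", and to "Decommissioned" once its blocks
# have been re-replicated elsewhere
hadoop dfsadmin -report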
A virtual machine was started on Shanda Cloud; the default user is root. The following error occurred while running Hadoop:
[Error description]
root@snda:/data/soft/hadoop-0.20.203.0# bin/hadoop fs -put conf input
11/08/03 09:58:33 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.
When Hadoop was started today, the Datanode would not start, and the log showed the following error:
java.io.IOException: File /opt/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apac
Exception Analysis
1. "cocould only be replicated to 0 nodes, instead of 1" Exception
(1) exception description
The configuration above is correct and the following steps have been completed:
[root@localhost hadoop-0.20.0]# bin/hadoop namenode -format
[root@localhost hadoop-0.20.0]# bin/start-all.sh
At this time,
Hadoop version: hadoop-2.5.1-x64.tar.gz
The setup followed the two-node Hadoop cluster build process at http://www.powerxing.com/install-hadoop-cluster/. I used VirtualBox to run four Ubuntu (version 15.10) virtual machines and build a four-node cluster.
4. Modify the config file: vi hadoop-2.6.0/etc/hadoop/hdfs-site.xml and add the properties (performed as the hadoop user; see the sketch after this list)
5. Modify the config file: vi etc/hadoop/mapred-site.xml (performed as the hadoop user)
This file does not exist by default; create it from a copy first (cd
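The items above are cut off, but the additions are typically XML property blocks along the following lines (a sketch only; the replication factor and the YARN framework setting are assumed examples, not values from the original):
In hdfs-site.xml:
  <property>
    <!-- number of block replicas; the value 3 is an assumed example -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
In mapred-site.xml (created by copying mapred-site.xml.template):
  <property>
    <!-- run MapReduce on YARN; an assumed example for Hadoop 2.6 -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>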
Hadoop error "could only be replicated to 0 nodes, instead of 1"
root@scutshuxue-desktop:/home/root/hadoop-0.19.2# bin/hadoop fs -put conf input
10/07/18 12:31:05 INFO hdfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/log4j.properties could only be replicated to 0
The file is conf/hdfs-site.xml, and the parameter names are dfs.hosts and dfs.hosts.exclude. What the parameters do: dfs.hosts lists the machines allowed to connect as Datanodes; if it is not configured or the specified list file is empty, any host is allowed to become a Datanode. dfs.hosts.exclude lists the machines denied as Datanodes; if a machine appears in both lists at the same time, it is rejected. Their essential role is to deny Datanode process connections on some
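Wired into conf/hdfs-site.xml, the two parameters look roughly like this (a sketch; both file paths are assumed examples):
  <property>
    <!-- hosts allowed to connect as Datanodes; path is an assumed example -->
    <name>dfs.hosts</name>
    <value>/opt/hadoop/conf/includes</value>
  </property>
  <property>
    <!-- hosts denied as Datanodes; path is an assumed example -->
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/conf/excludes</value>
  </property>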
Adding and deleting nodes in Hadoop. 1. Set up the new machine's hosts file as on a normal Datanode and add the Namenode IP. 2. Modify the Namenode configuration file conf/slaves to add the new node's IP or hostname. 3. On the new node's machine, start the service:
[root@slave-004 hadoop]# ./bin/hadoop-daemon.sh start datanode
[root@slav
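The last command is cut off; on a Hadoop 1.x-era cluster like the one quoted, the usual pairing is to start the Tasktracker as well (an assumption based on common practice, not on the truncated original):
# On the new node: bring up the Datanode, then the Tasktracker
./bin/hadoop-daemon.sh start datanode
./bin/hadoop-daemon.sh start tasktracker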
Adding and deleting nodes in Hadoop
tasks will still be sent to them if they appear normal. 1. On the master, change tasktracker-deny.list and add the corresponding machine. 2. Refresh the node configuration on the master: hadoop mradmin -refreshNodes. At this point you can see in the web UI that the number of nodes drops immediately and the number of excluded nodes has grown. You can click through for details to
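As commands on the master, the two steps read roughly as follows (a sketch; the hostname is an assumed example, and the deny list is whatever file mapred.hosts.exclude points at in your configuration):
# 1. Add the machine to be decommissioned to the deny list
echo "slave-004" >> conf/tasktracker-deny.list   # hostname is an assumed example
# 2. Make the Jobtracker re-read the node lists
hadoop mradmin -refreshNodes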
This article address: http://blog.csdn.net/kongxx/article/details/6892675
After installing Hadoop following the official documentation, I ran it in pseudo-distributed mode.
bin/hadoop fs -put conf input
The following exception occurred
11/10/20 08:18:22 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/fkong/input/conf/slaves could only be replicated to 0
For example, if the IP address of the newly added node is 192.168.1.xxx: add the hosts entry 192.168.1.xxx datanode-xxx on all NN and DN nodes; on xxx, create the user with useradd hadoop -s /bin/bash -m and add the IP of another DN; copy all the files in the .ssh path to /home/hadoop on xxx; install the JDK: apt-get install sun-java6-j
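Untangled into commands, the provisioning sketch looks like this (the hostname datanode-xxx follows the fragment above; the JDK package name completes the truncated "sun-java6-j" and is an assumption):
# On every NN and DN node: map the new node's address to a hostname
echo "192.168.1.xxx datanode-xxx" >> /etc/hosts
# On the new node: create the hadoop user with a home directory and bash shell
useradd hadoop -s /bin/bash -m
# On the new node: reuse the cluster's SSH setup by copying another DN's ~/.ssh
# into /home/hadoop/.ssh (so passwordless login keeps working)
# On the new node: install the JDK (package name assumed from the truncation)
apt-get install sun-java6-jdk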
Filter nodes inaccessible to Hadoop using Shell scripts
On the recently used hp1 cluster, maintenance is weak, so a node or two always drops out after a while. Today, when Hadoop was restarted, we found HDFS stuck in safe mode.
I decided to filter out all the inaccessible nodes
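A minimal sketch of such a script, assuming the slaves file lists one hostname per line and that dfs.hosts.exclude points at conf/excludes (both paths are assumptions):
#!/bin/bash
# Ping every node listed in the slaves file; unreachable ones go into the
# exclude file, then the Namenode is told to re-read its node lists.
SLAVES=conf/slaves        # assumed location of the slaves list
EXCLUDES=conf/excludes    # assumed file referenced by dfs.hosts.exclude
> "$EXCLUDES"
while read -r host; do
  if ! ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
    echo "$host" >> "$EXCLUDES"
  fi
done < "$SLAVES"
hadoop dfsadmin -refreshNodes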
Hadoop can estimate the importance of nodes in an undirected graph by counting the triangles each node participates in: determining node importance matters at big-data scale, where a large graph's data is distributed across machines; finding the important nodes and sorting them allows special treatment to t
Uploading a file with hdfs dfs -put xxx:
17/12/08 17:00:39 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/sanglp/hadoop-2.7.4.tar.gz._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operat
Yesterday a large number of Datanodes went offline at once; the preliminary judgment was that the dfs.datanode.max.transfer.threads parameter was set too small, so the hdfs-site.xml configuration file on every Datanode node was adjusted. After restarting the cluster, I ran a job to verify the change and looked at its configuration in Jobhistory; surprisingly, it still displayed the old value, that is, the job was still running with the old
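For reference, the adjusted property in each Datanode's hdfs-site.xml would look something like this (the value 8192 is an assumed example, not the value from the original post; the shipped default is 4096):
  <property>
    <!-- max threads for serving block transfers; 8192 is an assumed example -->
    <name>dfs.datanode.max.transfer.threads</name>
    <value>8192</value>
  </property>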
Hadoop Add and Remove nodes
A. Adding nodes
(a) There are two ways to add a node. One is static addition: stop the Hadoop cluster, add the appropriate configuration, and restart the cluster (not restated here).
(b) The other is dynamic addition: adding nodes without restarting the c
HDFS: add and delete nodes and perform HDFS balance
Mode 1: statically add a Datanode, stopping the Namenode
1. Stop Namenode
2. Modify the slaves file and push the update to every node
3. Start Namenode
4. Execute the Hadoop balance command (this rebalances the cluster and is not required if you are just adding a node; see the command sketch after this list)
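Step 4 as a command (a sketch; the threshold of 10 percent is the balancer's default and is shown only as an example):
# Rebalance block distribution across Datanodes; the threshold is the allowed
# deviation, in percent, of each node's utilization from the cluster average
start-balancer.sh -threshold 10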
-----------------------------------------
Mode 2: dynamically add a Datanode, keeping the Namenode running
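The page breaks off here; assembled from the dynamic-add fragments quoted earlier on this page (not from this truncated section), the sequence is roughly:
# On the Namenode: add the new node to the slaves file (and to dfs.hosts if used)
echo "datanode-xxx" >> conf/slaves   # hostname is an assumed example
# On the new node: start the Datanode without touching the Namenode
./bin/hadoop-daemon.sh start datanode
# On the Namenode: re-read the node lists, then optionally rebalance
hadoop dfsadmin -refreshNodes
start-balancer.sh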